- Search Results
Search for: All records
Total Resources: 2
Winnicki, Anna; Lubars, Joseph; Livesay, Michael; Srikant, R (Operations Research). In many applications of reinforcement learning, the underlying system dynamics are known, but computing the optimal policy is still difficult because the state space can be enormously large. For example, Shannon famously estimated the number of states in chess to be approximately 10^120. To handle such enormous state spaces, approximate policy iteration algorithms make two major approximations: the value function is assumed to lie in a lower-dimensional space, and only a few steps of a trajectory (called a rollout) are used to evaluate a policy. Using a counterexample, we show that these approximations can lead to the divergence of policy iteration. We show that sufficient lookahead in the policy improvement step mitigates this divergence and leads to algorithms with bounded errors, and that these errors can be controlled by appropriately choosing the amounts of lookahead and rollout.
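The interplay the abstract describes can be sketched on a toy MDP with known dynamics: an H-step lookahead in the policy improvement step, and an m-step rollout (a few applications of the policy's Bellman operator) in place of exact policy evaluation. This is a minimal illustrative sketch, not the paper's algorithm or analysis; all sizes, names, and parameter values below are assumptions chosen for demonstration.

```python
import numpy as np

# Toy MDP with known dynamics (a stand-in for the paper's setting, scaled
# down to 4 states): P[a][s, s'] is a transition probability, r[a][s] an
# expected reward. Sizes and the discount factor are illustrative.
n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = [rng.dirichlet(np.ones(n_states), size=n_states) for _ in range(n_actions)]
r = [rng.uniform(0.0, 1.0, size=n_states) for _ in range(n_actions)]

def bellman(V):
    """One application of the Bellman optimality operator (max over actions)."""
    return np.max([r[a] + gamma * P[a] @ V for a in range(n_actions)], axis=0)

def lookahead_policy(V, H):
    """Policy improvement with H-step lookahead: apply the optimality
    operator H-1 times, then act greedily with respect to the result."""
    W = V
    for _ in range(H - 1):
        W = bellman(W)
    Q = np.stack([r[a] + gamma * P[a] @ W for a in range(n_actions)])
    return np.argmax(Q, axis=0)

def rollout_eval(pi, V, m):
    """Approximate policy evaluation: an m-step rollout, i.e. m applications
    of the policy's Bellman operator T_pi starting from the current estimate."""
    for _ in range(m):
        V = np.array([r[pi[s]][s] + gamma * P[pi[s]][s] @ V
                      for s in range(n_states)])
    return V

# Approximate policy iteration: lookahead H = 3, rollout length m = 5.
V = np.zeros(n_states)
for _ in range(20):
    pi = lookahead_policy(V, H=3)
    V = rollout_eval(pi, V, m=5)

# Reference: the true optimal value function via value iteration.
V_star = np.zeros(n_states)
for _ in range(2000):
    V_star = bellman(V_star)
err = np.max(np.abs(V - V_star))
```

On this tiny, exactly represented MDP the iteration converges, so `err` comes out small; the paper's point is that once the value function is restricted to a lower-dimensional space, short rollouts alone can diverge, and it is the lookahead depth H that restores bounded errors.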